Results 1 - 20 of 77
1.
Technol Health Care ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38607777

ABSTRACT

BACKGROUND: In recent times, there has been widespread deployment of Internet of Things (IoT) applications, particularly in the healthcare sector, where computations involving user-specific data are carried out on cloud servers. However, the network nodes in IoT healthcare are vulnerable to an increased level of security threats. OBJECTIVE: This paper introduces a secure Electronic Health Record (EHR) framework with a focus on IoT. METHODS: Initially, the IoT sensor nodes are designated as registered patients and undergo initialization. Subsequently, a trust evaluation is conducted, and the clustering of trusted nodes is achieved by applying Student's T-Distribution-based Tasmanian Devil Optimization (STD-TDO). Utilizing Transposition Cipher-Squared random-number-generator-based Elliptic Curve Cryptography (TCS-ECC), the clustered nodes encrypt four types of sensed patient data. The resulting encrypted data undergoes hashing and is subsequently added to the blockchain. This configuration functions as a network, actively monitored to detect any external attacks. To accomplish this, a feature reputation score is calculated for the network's features. This score is then input into the Swish Beta-activated Recurrent Neural Network (SB-RNN) model to classify potential attacks. The latest transactions on the blockchain are scrutinized using the Neutrosophic Vague Set Fuzzy (NVS-Fu) algorithm to identify any double-spending attacks on non-compromised nodes. Finally, genuine nodes are granted permission to decrypt medical records. RESULTS: In the experimental analysis, the performance of the proposed methods was compared to that of existing models. The results demonstrated that the suggested approach increased the security level to 98%, reduced attack detection time to 1,300 ms, and raised accuracy to 98%. Furthermore, a comprehensive comparative analysis affirmed the reliability of the proposed model across all metrics.
CONCLUSION: The efficiency of the proposed healthcare framework is demonstrated by the experimental evaluation.
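The encrypt-then-hash-then-append pipeline described in this abstract can be illustrated with a minimal hash-chain sketch. This is not the paper's TCS-ECC scheme: the XOR "cipher" is a hypothetical placeholder used only to keep the example self-contained, and all names are invented.

```python
import hashlib

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder cipher standing in for the paper's TCS-ECC encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SimpleChain:
    """Append-only hash chain mimicking blockchain record storage."""
    def __init__(self):
        self.blocks = []  # each block: (prev_hash, record_hash)

    def add_record(self, ciphertext: bytes) -> str:
        prev = self.blocks[-1][1] if self.blocks else "0" * 64
        record_hash = hashlib.sha256(prev.encode() + ciphertext).hexdigest()
        self.blocks.append((prev, record_hash))
        return record_hash

    def verify(self) -> bool:
        # Valid if every block references its predecessor's hash.
        return all(self.blocks[i][0] == self.blocks[i - 1][1]
                   for i in range(1, len(self.blocks)))

chain = SimpleChain()
key = b"demo-key"
for reading in [b"hr=72", b"bp=120/80", b"spo2=98", b"temp=36.6"]:
    chain.add_record(xor_encrypt(reading, key))
```

`verify()` confirms that each block references its predecessor's hash, which is the linkage property the blockchain layer of the proposed framework relies on.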

2.
J Nephrol ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564072

ABSTRACT

BACKGROUND: There is limited evidence to support definite clinical outcomes of direct oral anticoagulant (DOAC) therapy in chronic kidney disease (CKD). By identifying the important variables associated with clinical outcomes following DOAC administration in patients in different stages of CKD, this study aims to address this evidence gap. METHODS: An anonymised dataset comprising 97,413 patients receiving DOAC therapy in a tertiary health setting was systematically extracted from the multidimensional electronic health records and prepared for analysis. Machine learning classifiers were applied to the prepared dataset to select the important features which informed covariate selection in multivariate logistic regression analysis. RESULTS: For both CKD and non-CKD DOAC users, features such as length of stay, treatment days, and age were ranked highest for relevance to adverse outcomes like death and stroke. Patients with Stage 3a CKD had significantly higher odds of ischaemic stroke (OR 2.45, 95% CI: 2.10-2.86; p = 0.001) and lower odds of all-cause mortality (OR 0.87, 95% CI: 0.79-0.95; p = 0.001) on apixaban therapy. In patients with CKD (Stage 5) receiving apixaban, the odds of death were significantly lowered (OR 0.28, 95% CI: 0.14-0.58; p = 0.001), while the effect on ischaemic stroke was insignificant. CONCLUSIONS: A positive effect of DOAC therapy was observed in advanced CKD. Key factors influencing clinical outcomes following DOAC administration in patients in different stages of CKD were identified. These are crucial for designing more advanced studies to explore safer and more effective DOAC therapy for the population.
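The feature-ranking step in this study (ML classifiers selecting covariates for the regression) can be sketched in a deliberately simplified form. A univariate absolute-correlation score stands in here for the classifiers' feature importances, and the toy cohort values are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def abs_corr(feature, outcome):
    # Absolute Pearson correlation between one feature and a binary outcome.
    mx, my = mean(feature), mean(outcome)
    cov = sum((x - mx) * (y - my) for x, y in zip(feature, outcome))
    vx = sum((x - mx) ** 2 for x in feature) ** 0.5
    vy = sum((y - my) ** 2 for y in outcome) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def rank_features(rows, outcome, names):
    # Rank candidate covariates by univariate relevance to the outcome.
    cols = list(zip(*rows))
    scores = {n: abs_corr(c, outcome) for n, c in zip(names, cols)}
    return sorted(scores, key=scores.get, reverse=True)

# Toy cohort rows: (length of stay, treatment days, age) - invented values.
rows = [(3, 5, 60), (14, 20, 81), (2, 4, 55),
        (21, 30, 84), (4, 6, 62), (18, 25, 79)]
outcome = [0, 1, 0, 1, 0, 1]  # 1 = adverse outcome
ranked = rank_features(rows, outcome,
                       ["length_of_stay", "treatment_days", "age"])
```

The top-ranked names would then inform covariate selection for a multivariate logistic regression, mirroring the two-stage design of the study.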

3.
J Community Hosp Intern Med Perspect ; 14(1): 13-17, 2024.
Article in English | MEDLINE | ID: mdl-38482076

ABSTRACT

Background: Fecal occult blood tests (FOBT) are inappropriately used in patients with melena, hematochezia, coffee-ground emesis, iron deficiency anemia, and diarrhea. The use of FOBT for reasons other than screening for colorectal cancer is considered low-value and unnecessary. Methods: A quality improvement project utilized education, a Best Practice Advisory (BPA), and modification of order sets in the electronic health record (EHR). The interventions were carried out in sequential order based on the Plan-Do-Study-Act (PDSA) method. An annotated run chart was used to analyze the collected data. Results: Education and the Best Practice Advisory within the EHR led to a significant reduction in the use of FOBT in the ED. The interventions eventually led to a consensus on, and removal of, FOBT from the order set of the EHR for patients in the ED and hospital units. Conclusions: The use of an electronic BPA, education, and modification of order sets in the EHR can be effective at de-implementing unnecessary tests and procedures such as FOBT in the ED and hospital units.

4.
Ann Fam Med ; 22(1): 12-18, 2024.
Article in English | MEDLINE | ID: mdl-38253499

ABSTRACT

PURPOSE: The purpose of this study is to evaluate recent trends in primary care physician (PCP) electronic health record (EHR) workload. METHODS: This longitudinal study observed the EHR use of 141 academic PCPs over 4 years (May 2019 to March 2023). Ambulatory full-time equivalency (aFTE), visit volume, and panel size were evaluated. Electronic health record time and inbox message volume were measured per 8 hours of scheduled clinic appointments. RESULTS: From the pre-COVID-19 pandemic year (May 2019 to February 2020) to the most recent study year (April 2022 to March 2023), the average time PCPs spent in the EHR per 8 hours of scheduled clinic appointments increased (+28.4 minutes, 7.8%), as did time in orders (+23.1 minutes, 58.9%), inbox (+14.0 minutes, 24.4%), chart review (+7.2 minutes, 13.0%), notes (+2.9 minutes, 2.3%), outside scheduled hours on days with scheduled appointments (+6.4 minutes, 8.2%), and on unscheduled days (+13.6 minutes, 19.9%). Primary care physicians received more patient medical advice requests (+5.4 messages, 55.5%) and prescription messages (+2.3, 19.5%) per 8 hours of scheduled clinic appointments, but fewer patient calls (-2.8, -10.5%) and results messages (-0.3, -2.7%). While total time in the EHR continued to increase in the final study year (+7.7 minutes, 2.0%), inbox time decreased slightly from the year prior (-2.2 minutes, -3.0%). Primary care physicians' average aFTE decreased 5.2% from 0.66 to 0.63 over 4 years. CONCLUSIONS: Primary care physicians' time in the EHR continues to grow. While PCPs' inbox time may be stabilizing, it is still substantially higher than pre-pandemic levels. It is imperative that health systems develop strategies to change the EHR workload trajectory, to minimize PCPs' occupational stress and mitigate unnecessary reductions in the effective physician workforce resulting from the increased EHR burden.


Subject(s)
Electronic Health Records , Physicians, Primary Care , Humans , Longitudinal Studies , Pandemics , Workload
5.
Front Public Health ; 11: 1324228, 2023.
Article in English | MEDLINE | ID: mdl-38249396

ABSTRACT

Background: The construction of medical consortiums not only promotes active cooperation among hospitals, but also further intensifies active competition among them. The shared use of electronic health records (EHR) breaks the original pattern of benefit distribution among hospitals. Objective: The purpose of this paper is to establish an incentive mechanism for the shared use of EHR, to reveal the incentive effect and mechanism of key factors, and to put forward management suggestions for resolving real-world conflicts. Methods: We constructed a basic incentive model and an incentive model that introduces performance evaluation as a supervisory signal, based on analyzing the hospital cost function, the hospital benefit function, and the incentive contract function. Finally, the incentive effects of key factors before and after the introduction of performance evaluation were verified and compared using MATLAB simulations. Results: The profit level and incentive coefficient of hospitals sharing EHR are independent of the amount of one-time government subsidies. Regardless of whether a performance evaluation supervisory signal is introduced, the incentive coefficients are increasing functions of ρ and τ, but decreasing functions of β, δ, and γ. After the inclusion of the performance evaluation supervisory signal in the model, the ability of hospitals to use EHR has a greater impact on improving both incentive effects and benefit levels. The impact of the value-added coefficient on the level of earnings is consistently greater than it would have been without the inclusion of the performance evaluation supervisory signal. Conclusions: Enhancing the capacity of hospitals to use EHR and tapping and expanding the value-added space of EHR are two key paths to promote sustainable shared use of EHR. Substantive performance evaluation plays an important role in stabilizing incentive effects.


Subject(s)
Electronic Health Records , Motivation , Computer Simulation , Hospital Costs , Hospitals
6.
AMIA Annu Symp Proc ; 2023: 814-823, 2023.
Article in English | MEDLINE | ID: mdl-38222389

ABSTRACT

In the era of big data, there is an increasing need for healthcare providers, communities, and researchers to share data and collaborate to improve health outcomes, generate valuable insights, and advance research. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law designed to protect sensitive health information by defining regulations for protected health information (PHI). However, it does not provide efficient tools for detecting or removing PHI before data sharing. One of the challenges in this area of research is the heterogeneous nature of PHI fields in data across different parties. This variability means that rule-based sensitive-variable identification systems that work on one database can fail on another. To address this issue, our paper explores the use of machine learning algorithms to identify sensitive variables in structured data, thus facilitating the de-identification process. We made a key observation that the distributions of metadata of PHI fields and non-PHI fields are very different. Based on this novel finding, we engineered over 30 features from the metadata of the original features and used machine learning to build classification models to automatically identify PHI fields in structured Electronic Health Record (EHR) data. We trained the model on a variety of large EHR databases from different data sources and found that our algorithm achieves 99% accuracy when detecting PHI-related fields for unseen datasets. The implications of our study are significant and can benefit industries that handle sensitive data.
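The key observation here, that the metadata distributions of PHI and non-PHI columns differ, can be illustrated with a toy extractor. The features, thresholds, and rule below are hypothetical stand-ins for the paper's 30+ engineered features and trained classifiers:

```python
def metadata_features(name, values):
    # Simple column-level metadata features (not the paper's feature set).
    non_null = [v for v in values if v is not None]
    n = len(non_null)
    return {
        "uniqueness": len(set(non_null)) / n if n else 0.0,
        "avg_len": sum(len(str(v)) for v in non_null) / n if n else 0.0,
        "name_hint": int(any(k in name.lower()
                             for k in ("name", "ssn", "phone", "addr"))),
    }

def looks_like_phi(name, values):
    # Toy rule standing in for a trained classifier: PHI columns tend to be
    # highly unique free text, or carry an identifier-like column name.
    f = metadata_features(name, values)
    return f["name_hint"] == 1 or (f["uniqueness"] > 0.9 and f["avg_len"] > 5)

cols = {
    "patient_name": ["Ada Li", "Bo Park", "Cy Reyes", "Dee Flint"],
    "smoker_flag": [0, 1, 0, 0],
}
flags = {c: looks_like_phi(c, v) for c, v in cols.items()}
```

In the paper's setting, the same metadata features would instead be fed to a model trained across many EHR databases, which is what makes the approach transfer to unseen datasets.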


Subject(s)
Confidentiality , Medical Records Systems, Computerized , United States , Humans , Health Insurance Portability and Accountability Act , Algorithms , Machine Learning , Electronic Health Records
7.
Front Digit Health ; 5: 1275711, 2023.
Article in English | MEDLINE | ID: mdl-38034906

ABSTRACT

Objectives: The development of a standardized technical framework for exchanging electronic health records is widely recognized as a challenging endeavor that necessitates appropriate technological, semantic, organizational, and legal interventions to support the continuity of health and care. In this context, this study delineates a pan-European hackathon aimed at evaluating the efforts undertaken by member states of the European Union to develop a European electronic health record exchange format. This format is intended to facilitate secure cross-border healthcare and optimize service delivery to citizens, paving the way toward a unified European health data space. Methods: The hackathon was conducted within the scope of the X-eHealth project. Interested parties were initially presented with a representative clinical scenario and a set of specifications pertaining to the European electronic health record exchange format, encompassing Laboratory Results Reports, Medical Imaging and Reports, and Hospital Discharge Reports. In addition, five onboarding webinars and two professional training events were organized to support the participating entities. To ensure a minimum acceptable quality threshold, a set of inclusion criteria for participants was outlined for the interested teams. Results: Eight teams participated in the hackathon, showcasing state-of-the-art applications. These teams utilized technologies such as Health Level Seven-Fast Healthcare Interoperability Resources (HL7 FHIR) and Clinical Document Architecture (CDA), alongside pertinent IHE integration profiles. They demonstrated a range of complementary uses and practices, contributing substantial inputs toward the development of future-proof electronic health record management systems. Conclusions: The execution of the hackathon demonstrated the efficacy of such approaches in uniting teams from diverse backgrounds to develop state-of-the-art applications. 
The outcomes produced by the event serve as proof-of-concept demonstrators for managing and preventing chronic diseases, delivering value to citizens, companies, and the research community.

8.
Bioengineering (Basel) ; 10(11)2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38002431

ABSTRACT

BACKGROUND: Although electronic health records (EHR) provide useful insights into disease patterns and patient treatment optimisation, their reliance on unstructured data presents a difficulty. Echocardiography reports, which provide extensive pathology information for cardiovascular patients, are particularly challenging to extract and analyse because of their narrative structure. Although natural language processing (NLP) has been utilised successfully in a variety of medical fields, it is not commonly used in echocardiography analysis. OBJECTIVES: To develop an NLP-based approach for extracting and categorising data from echocardiography reports by accurately converting continuous (e.g., LVOT VTI, AV VTI and TR Vmax) and discrete (e.g., regurgitation severity) outcomes in a semi-structured narrative format into a structured and categorised format, allowing for future research or clinical use. METHODS: 135,062 Trans-Thoracic Echocardiogram (TTE) reports were derived from 146,967 baseline echocardiogram reports and split into three cohorts: Training and Validation (n = 1,075), Test Dataset (n = 98) and Application Dataset (n = 133,889). The NLP system was developed and iteratively refined using medical expert knowledge. The system was used to curate a moderate-fidelity database from extractions of 133,889 reports. A hold-out validation set of 98 reports was blindly annotated and extracted by two clinicians for comparison with the NLP extraction. Agreement, discrimination, accuracy and calibration of outcome measure extractions were evaluated. RESULTS: Continuous outcomes including LVOT VTI, AV VTI and TR Vmax exhibited perfect inter-rater reliability using intra-class correlation scores (ICC = 1.00, p < 0.05) alongside high R2 values, demonstrating an ideal alignment between the NLP system and clinicians.
A good level (ICC = 0.75-0.9, p < 0.05) of inter-rater reliability was observed for outcomes such as LVOT Diam, Lateral MAPSE, Peak E Velocity, Lateral E' Velocity, PV Vmax, Sinuses of Valsalva and Ascending Aorta diameters. Furthermore, the accuracy rate for discrete outcome measures was 91.38% in the confusion matrix analysis, indicating effective performance. CONCLUSIONS: The NLP-based technique yielded good results when it came to extracting and categorising data from echocardiography reports. The system demonstrated a high degree of agreement and concordance with clinician extractions. This study contributes to the effective use of semi-structured data by providing a useful tool for converting semi-structured text to a structured echo report that can be used for data management. Additional validation and implementation in healthcare settings can improve data availability and support research and clinical decision-making.
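A minimal regex-based extractor illustrates the kind of narrative-to-structured conversion described, though the study's NLP system is far more extensive; the patterns, field names, and report wording below are invented for illustration:

```python
import re

# Hypothetical patterns for two continuous echo measures.
PATTERNS = {
    "LVOT_VTI": r"LVOT\s+VTI[:\s]+([\d.]+)\s*cm",
    "TR_Vmax": r"TR\s+Vmax[:\s]+([\d.]+)\s*m/s",
}
# Discrete outcome: regurgitation severity category.
SEVERITY = r"(trivial|mild|moderate|severe)\s+mitral\s+regurgitation"

def extract(report: str) -> dict:
    out = {}
    for field, pat in PATTERNS.items():
        m = re.search(pat, report, re.IGNORECASE)
        out[field] = float(m.group(1)) if m else None
    m = re.search(SEVERITY, report, re.IGNORECASE)
    out["MR_severity"] = m.group(1).lower() if m else None
    return out

report = ("LVOT VTI: 18.2 cm. TR Vmax 2.6 m/s. "
          "There is mild mitral regurgitation.")
structured = extract(report)
```

Each extracted row could then be appended to a structured database table, which is the kind of downstream use the study's moderate-fidelity database supports.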

9.
Cancer ; 130(1): 60-67, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37851512

ABSTRACT

BACKGROUND: A lack of onsite clinical trials is the largest barrier to participation of cancer patients in trials. Development of an automated process for regional trial eligibility screening first requires identification of patient electronic health record data that allows effective trial screening, and evidence that searching for trials regionally has a positive impact compared with site-specific searching. METHODS: To assess a screening framework that would support an automated regional search tool, a set of patient clinical variables was analyzed for prescreening clinical trials. The variables were used to assess regional compared with site-specific screening throughout the United States. RESULTS: Eight core variables from patient electronic health records were identified that yielded likely matches in a prescreen process. Assessment of the screening framework was performed using these variables to search for trials locally and regionally for an 84-patient cohort. The likelihood that a trial returned in this prescreen was a provisional trial match was 45.7%. Expanding the search radius to 20 miles led to a net 91% increase in matches across cancers within the tested cohort. In a U.S. regional analysis, for sparsely populated areas, searching a 100-mile radius using the prescreening framework was needed, whereas for urban areas a 20-mile radius was sufficient. CONCLUSION: A clinical trial screening framework was assessed that uses limited patient data to efficiently and effectively identify prescreen matches for clinical trials. This framework improves trial matching rates when searching regionally compared with locally, although the applicability of this framework may vary geographically depending on oncology practice density. PLAIN LANGUAGE SUMMARY: Clinical trials provide cancer patients the opportunity to participate in research and development of new drugs and treatment approaches. 
It can be difficult to find available clinical trials for which a patient is eligible. This article describes an approach to clinical trial matching using limited patient data to search for trials regionally, beyond just the patient's local care site. Feasibility testing shows that this process can lead to a net 91% increase in the number of potential clinical trial matches available within 20 miles of a patient. Based on these findings, a software tool based on this model is being developed that will automatically send limited, deidentified information from patient medical records to services that can identify possible clinical trials within a given region.
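The regional-radius search at the heart of this framework can be sketched with a haversine distance filter; the patient location, site names, and coordinates below are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in miles.
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def trials_within(patient, sites, radius_miles):
    # Keep only trial sites inside the search radius around the patient.
    return [name for name, (lat, lon) in sites.items()
            if haversine_miles(patient[0], patient[1], lat, lon) <= radius_miles]

# Hypothetical patient near downtown Chicago and three trial sites.
patient = (41.8781, -87.6298)
sites = {
    "local_site": (41.8827, -87.6233),     # well under a mile away
    "regional_site": (42.0451, -87.6877),  # roughly a dozen miles away
    "distant_site": (39.7684, -86.1581),   # Indianapolis, far outside 20 mi
}
nearby = trials_within(patient, sites, radius_miles=20)
```

Widening `radius_miles` from a site-local value to 20 miles (or 100 miles in sparse regions, per the abstract) is exactly what grows the candidate-trial pool.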


Subject(s)
Neoplasms , Humans , Electronic Health Records , Eligibility Determination , Feasibility Studies , Neoplasms/diagnosis , Neoplasms/therapy , Patient Selection , Clinical Trials as Topic
10.
J Biomed Inform ; 147: 104522, 2023 11.
Article in English | MEDLINE | ID: mdl-37827476

ABSTRACT

OBJECTIVE: Audit logs in electronic health record (EHR) systems capture interactions of providers with clinical data. We determine if machine learning (ML) models trained using audit logs in conjunction with clinical data ("observational supervision") outperform ML models trained using clinical data alone in clinical outcome prediction tasks, and whether they are more robust to temporal distribution shifts in the data. MATERIALS AND METHODS: Using clinical and audit log data from Stanford Healthcare, we trained and evaluated various ML models including logistic regression, support vector machine (SVM) classifiers, neural networks, random forests, and gradient boosted machines (GBMs) on clinical EHR data, with and without audit logs for two clinical outcome prediction tasks: major adverse kidney events within 120 days of ICU admission (MAKE-120) in acute kidney injury (AKI) patients and 30-day readmission in acute stroke patients. We further tested the best performing models using patient data acquired during different time-intervals to evaluate the impact of temporal distribution shifts on model performance. RESULTS: Performance generally improved for all models when trained with clinical EHR data and audit log data compared with those trained with only clinical EHR data, with GBMs tending to have the overall best performance. GBMs trained with clinical EHR data and audit logs outperformed GBMs trained without audit logs in both clinical outcome prediction tasks: AUROC 0.88 (95% CI: 0.85-0.91) vs. 0.79 (95% CI: 0.77-0.81), respectively, for MAKE-120 prediction in AKI patients, and AUROC 0.74 (95% CI: 0.71-0.77) vs. 0.63 (95% CI: 0.62-0.64), respectively, for 30-day readmission prediction in acute stroke patients. The performance of GBM models trained using audit log and clinical data degraded less in later time-intervals than models trained using only clinical data. 
CONCLUSION: Observational supervision with audit logs improved the performance of ML models trained to predict important clinical outcomes in patients with AKI and acute stroke, and improved robustness to temporal distribution shifts.
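The "observational supervision" idea, augmenting clinical features with features derived from audit-log events, can be sketched as a simple feature-assembly step. The event schema and field names below are invented, not Stanford Healthcare's actual log format:

```python
from collections import Counter

def audit_features(audit_log, patient_id):
    # Derive simple counts from audit-log events for one patient:
    # how often, and by how many distinct providers, the chart was touched.
    events = [e for e in audit_log if e["patient"] == patient_id]
    actions = Counter(e["action"] for e in events)
    return {
        "n_chart_views": actions.get("view", 0),
        "n_note_edits": actions.get("edit", 0),
        "n_distinct_providers": len({e["provider"] for e in events}),
    }

def build_row(clinical, audit_log, patient_id):
    # Observational supervision: clinical features + audit-log features
    # in a single training row.
    row = dict(clinical[patient_id])
    row.update(audit_features(audit_log, patient_id))
    return row

clinical = {"p1": {"age": 67, "creatinine": 2.4}}
audit_log = [
    {"patient": "p1", "provider": "dr_a", "action": "view"},
    {"patient": "p1", "provider": "dr_b", "action": "view"},
    {"patient": "p1", "provider": "dr_a", "action": "edit"},
]
row = build_row(clinical, audit_log, "p1")
```

Rows assembled this way would then feed any of the study's model families (logistic regression, SVM, GBM, and so on); the gain reported in the paper comes from the added audit-log columns, not from a particular model.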


Subject(s)
Acute Kidney Injury , Stroke , Humans , Electronic Health Records , Hospitalization , Prognosis
11.
Front Artif Intell ; 6: 1224529, 2023.
Article in English | MEDLINE | ID: mdl-37396971
12.
Front Digit Health ; 5: 1150687, 2023.
Article in English | MEDLINE | ID: mdl-37342866

ABSTRACT

Endometriosis is a chronic, complex disease for which there are vast disparities in diagnosis and treatment between sociodemographic groups. Clinical presentation of endometriosis can vary from asymptomatic disease, often identified during (in)fertility consultations, to dysmenorrhea and debilitating pelvic pain. Because of this complexity, delayed diagnosis (mean time to diagnosis is 1.7-3.6 years) and misdiagnosis are common. Early and accurate diagnosis of endometriosis remains a research priority for patient advocates and healthcare providers. Electronic health records (EHRs) have been widely adopted as a data source in biomedical research. However, they remain a largely untapped source of data for endometriosis research. EHRs capture diverse, real-world patient populations and care trajectories and can be used to learn patterns of underlying risk factors for endometriosis, which, in turn, can inform screening guidelines that help clinicians efficiently and effectively recognize and diagnose the disease in all patient populations, reducing inequities in care. Here, we provide an overview of the advantages and limitations of using EHR data to study endometriosis. We describe the prevalence of endometriosis observed in diverse populations from multiple healthcare institutions, examples of variables that can be extracted from EHRs to enhance the accuracy of endometriosis prediction, and opportunities to leverage longitudinal EHR data to improve our understanding of long-term health consequences for all patients.

13.
Health Syst (Basingstoke) ; 12(2): 223-242, 2023.
Article in English | MEDLINE | ID: mdl-37234469

ABSTRACT

The widespread use of Blockchain technology (BT) in developing nations remains in its early stages, necessitating a more comprehensive evaluation using efficient and adaptable approaches. The need for digitalization to boost operational effectiveness is growing in the healthcare sector. Despite BT's potential as a competitive option for the healthcare sector, insufficient research has prevented it from being fully utilised. This study intends to identify the main sociological, economic, and infrastructure obstacles to BT adoption in developing nations' public health systems. To accomplish this goal, the study employs a multi-level analysis of blockchain hurdles using a hybrid approach. The study's findings provide decision-makers with guidance on how to proceed, as well as insight into implementation challenges.

14.
Stud Health Technol Inform ; 302: 192-196, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203645

ABSTRACT

The high investment in deploying a new Electronic Health Record (EHR) makes it necessary to understand its effect on usability (effectiveness, efficiency, and user satisfaction). This paper describes the evaluation of user satisfaction based on data gathered from three Northern Norway Health Trust hospitals. A questionnaire gathered responses about user satisfaction with the newly adopted EHR. A regression model reduced the number of satisfaction items from 15 to nine, with the result representing user satisfaction with EHR features. The results show positive satisfaction with the newly introduced EHR, attributable to proper EHR transition planning and the vendor's previous experience with the hospitals involved.


Subject(s)
Electronic Health Records , User-Computer Interface , Hospitals , Personal Satisfaction , Commerce
15.
Front Neurol ; 14: 1108222, 2023.
Article in English | MEDLINE | ID: mdl-37153672

ABSTRACT

Objective: We retrospectively screened 350,116 electronic health records (EHRs) to identify patients suspected of having Pompe disease. From these suspected cases, we then describe their phenotypical characteristics and estimate the prevalence in the respective population covered by the EHRs. Methods: We applied Symptoma's Artificial Intelligence-based approach for identifying rare disease patients to retrospective anonymized EHRs provided by the "University Hospital Salzburg" clinic group. Within 1 month, the AI screened 350,116 EHRs reaching back 15 years from five hospitals, and 104 patients were flagged as probable for Pompe disease. Flagged patients were manually reviewed and assessed by generalist and specialist physicians for their likelihood of Pompe disease, from which the performance of the algorithms was evaluated. Results: Of the 104 patients flagged by the algorithms, generalist physicians found five "diagnosed," 10 "suspected," and seven patients with "reduced suspicion." After feedback from Pompe disease specialist physicians, 19 patients remained clinically plausible for Pompe disease, resulting in a specificity of 18.27% for the AI. Estimating from the remaining plausible patients, the prevalence of Pompe disease for the greater Salzburg region [incl. Bavaria (Germany), Styria (Austria), and Upper Austria (Austria)] was one in every 18,427 people. Phenotypes for patient cohorts with an approximated onset of symptoms above or below 1 year of age were established, which correspond to late-onset Pompe disease (LOPD) and infantile-onset Pompe disease (IOPD), respectively. Conclusion: Our study shows the feasibility of Symptoma's AI-based approach for identifying rare disease patients using retrospective EHRs. Via the algorithm's screening of an entire EHR population, a physician had only to manually review 5.47 patients on average to find one suspected candidate.
This efficiency is crucial as Pompe disease, while rare, is a progressively debilitating but treatable neuromuscular disease. As such, we demonstrated both the efficiency of the approach and the potential of a scalable solution to the systematic identification of rare disease patients. Thus, similar implementation of this methodology should be encouraged to improve care for all rare disease patients.
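The review-burden and hit-rate figures quoted in this abstract follow directly from the flagged and confirmed counts it reports:

```python
flagged = 104          # patients flagged by the AI
plausible = 19         # remained clinically plausible after specialist review
population = 350_116   # EHRs screened

reviews_per_candidate = flagged / plausible   # manual reviews per confirmed find
hit_rate_pct = 100 * plausible / flagged      # the 18.27% figure in the abstract
flag_rate = flagged / population              # fraction of all EHRs flagged
```

Rounding `reviews_per_candidate` to two decimals reproduces the 5.47 reviews-per-candidate figure, and `hit_rate_pct` reproduces the 18.27% quoted for the AI.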

16.
Front Pharmacol ; 14: 1110036, 2023.
Article in English | MEDLINE | ID: mdl-36825151

ABSTRACT

Objectives: To describe the sex and gender differences in treatment initiation and in the socio-demographic and clinical characteristics of all patients initiating an oral anticoagulant (OAC), and the sex and gender differences in prescribed doses and in adherence and persistence to treatment among those receiving direct oral anticoagulants (DOAC). Material and methods: Cohort study including patients with non-valvular atrial fibrillation (NVAF) who initiated OAC in 2011-2020. Data proceed from SIDIAP, the Information System for Research in Primary Care, in Catalonia, Spain. Results: 123,250 people initiated OAC, 46.9% women and 53.1% men. Women were older and the clinical characteristics differed between genders. Women had a higher risk of stroke than men at baseline, were more frequently underdosed with DOAC, and discontinued the DOAC less frequently than men. Conclusion: We described the dose adequacy of patients receiving DOAC, finding a high frequency of underdosing, significantly higher in women than in men. Adherence was generally high, with higher levels in women only for rivaroxaban. Persistence during the first year of treatment was also generally high, with women being significantly more persistent than men in the case of dabigatran and edoxaban. Dose inadequacy and lack of adherence and persistence can result in less effective and safe treatments. It is necessary to conduct studies analysing sex and gender differences in health and disease.

17.
Front Glob Womens Health ; 3: 1006425, 2022.
Article in English | MEDLINE | ID: mdl-36741297

ABSTRACT

Women have historically been underrepresented in cardiovascular clinical trials, resulting in a lack of sex-specific data. This is especially problematic in two situations, namely those where diseases manifest differently in women and men and those where biological differences between the sexes might affect the efficacy and/or safety of medication. There is therefore a pressing need for datasets with proper representation of women to address questions related to these situations. Clinical care data could fit this bill nicely because of their unique broad scope across both patient groups and clinical measures. This perspective piece presents the potential of clinical care data in sex differences research and discusses current challenges clinical care data-based research faces. It also suggests strategies to reduce the effect of these limitations, and explores whether clinical care data alone will be sufficient to close evidence gaps or whether a more comprehensive approach is needed.

18.
Front Dent Med ; 3: 2022.
Article in English | MEDLINE | ID: mdl-36643095

ABSTRACT

Background: The objective of this study was to build models that define variables contributing to pneumonia risk by applying supervised machine learning (ML) to medical and oral disease data, defining key risk variables that contribute to the emergence of any pneumonia and of pneumonia subtypes. Methods: Retrospective medical and dental data were retrieved from Marshfield Clinic Health System's data warehouse and integrated electronic medical-dental health records (iEHR). Retrieved data were pre-processed prior to conducting analyses, including matching of cases to controls by (a) race/ethnicity and (b) a 1:1 case:control ratio. Variables with >30% missing data were excluded from analysis. Datasets were divided into four subsets: (1) All Pneumonia (all cases and controls); (2) community-acquired (CAP)/healthcare-associated (HCAP) pneumonias; (3) ventilator-associated (VAP)/hospital-acquired (HAP) pneumonias; and (4) aspiration pneumonia (AP). The performance of five algorithms was compared across the four subsets: Naïve Bayes, Logistic Regression, Support Vector Machine (SVM), Multi-Layer Perceptron (MLP) and Random Forests. Feature (input variable) selection and ten-fold cross-validation were performed on all the datasets. An evaluation set (10%) was extracted from the subsets for further validation. Model performance was evaluated in terms of total accuracy, sensitivity, specificity, F-measure, Matthews correlation coefficient and area under the receiver operating characteristic curve (AUC). Results: In total, 6,034 records (cases and controls) met eligibility for inclusion in the main dataset. After feature selection, the variables retained in the subsets were: All Pneumonia (n = 29 variables), CAP-HCAP (n = 26 variables), VAP-HAP (n = 40 variables) and AP (n = 37 variables), respectively. Variables retained (n = 22) were common across all four pneumonia subsets.
Of these, the number of missing teeth, periodontal status, periodontal pocket depth more than 5 mm and number of restored teeth contributed to all the subsets and were retained in the model. MLP outperformed other predictive models for All Pneumonia, CAP-HCAP and AP subsets, while SVM outperformed other models in VAP-HAP subset. Conclusion: This study validates previously described associations between poor oral health and pneumonia. Benefits of an integrated medical-dental record and care delivery environment for modeling pneumonia risk are highlighted. Based on findings, risk score development could inform referrals and follow-up in integrated healthcare delivery environment and coordinated patient management.
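The model-comparison design described in this abstract (five classifiers, ten-fold cross-validation, a 10% held-out evaluation set, AUC as one metric) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, since the Marshfield iEHR data are not public; the feature count (29, matching the All Pneumonia subset) and all names are assumptions, not the authors' code.

```python
# Sketch: compare five classifiers with ten-fold cross-validated AUC,
# holding out a 10% evaluation set as in the study design.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the case/control dataset (29 retained variables).
X, y = make_classification(n_samples=600, n_features=29, random_state=0)

# Extract a 10% evaluation set, mirroring the paper's validation step.
X_tr, X_ev, y_tr, y_ev = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)

models = {
    "NaiveBayes": GaussianNB(),
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "RandomForest": RandomForestClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    # Ten-fold cross-validated AUC on the training portion.
    scores = cross_val_score(model, X_tr, y_tr, cv=10, scoring="roc_auc")
    results[name] = scores.mean()
    print(f"{name:12s} mean AUC = {scores.mean():.3f}")
```

The same loop extends naturally to the study's other metrics (accuracy, sensitivity, specificity, F-measure, Matthews correlation coefficient) by changing the `scoring` argument.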

19.
Curr Protoc ; 2(11): e603, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36441943

ABSTRACT

Genome-wide association studies (GWAS) are being conducted at an unprecedented rate in population-based cohorts and have increased our understanding of the pathophysiology of many complex diseases. Regardless of the context, the practical utility of this information ultimately depends upon the quality of the data used for statistical analyses. Quality control (QC) procedures for GWAS are constantly evolving. Here, we enumerate some of the challenges in QC of genotyped GWAS data and describe the approaches involving genotype imputation of a sample dataset along with post-imputation quality assurance, thereby minimizing potential bias and error in GWAS results. We discuss common issues associated with QC of the GWAS data (genotyped and imputed), including data file formats, software packages for data manipulation and analysis, sex chromosome anomalies, sample identity, sample relatedness, population substructure, batch effects, and marker quality. We provide detailed guidelines along with a sample dataset to suggest current best practices and discuss areas of ongoing and future research. © 2022 Wiley Periodicals LLC.
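The marker-quality filters mentioned in this protocol abstract (applied before imputation) typically include call rate, minor allele frequency, and a Hardy-Weinberg equilibrium test. The sketch below illustrates these three filters on a 0/1/2 genotype matrix; the thresholds are conventional defaults, not the protocol's exact values, and the function is an illustrative assumption rather than the authors' pipeline (which is normally run with dedicated tools such as PLINK).

```python
# Illustrative marker-level QC: call rate, MAF, and HWE filters on a
# (n_samples, n_markers) matrix of allele counts (0/1/2, NaN = missing).
import numpy as np
from scipy.stats import chi2

def marker_qc(genotypes, max_missing=0.05, min_maf=0.01, hwe_p=1e-6):
    """Return a boolean mask of markers passing all three filters."""
    # 1. Missingness (call-rate) filter.
    miss = np.isnan(genotypes).mean(axis=0)
    keep = miss <= max_missing

    # 2. Minor allele frequency from non-missing calls.
    af = np.nanmean(genotypes, axis=0) / 2.0
    maf = np.minimum(af, 1.0 - af)
    keep &= maf >= min_maf

    # 3. Hardy-Weinberg equilibrium: 1-df chi-square on observed vs
    #    expected genotype counts (3 classes minus 1 estimated allele freq).
    pvals = np.ones(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        g = genotypes[~np.isnan(genotypes[:, j]), j]
        m = len(g)
        if m == 0:
            continue
        p = g.mean() / 2.0
        obs = np.array([(g == 0).sum(), (g == 1).sum(), (g == 2).sum()])
        exp = m * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
        with np.errstate(divide="ignore", invalid="ignore"):
            stat = np.nansum(np.where(exp > 0, (obs - exp) ** 2 / exp, 0.0))
        pvals[j] = chi2.sf(stat, df=1)
    keep &= pvals >= hwe_p
    return keep
```

A marker failing any one filter is dropped; the mask can then be used to subset the genotype matrix before imputation.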


Subject(s)
Genome-Wide Association Study , Research Design , Humans , Quality Control , Genotype , Sex Chromosome Aberrations
20.
Prim Care Diabetes ; 17(1): 43-47, 2023 02.
Article in English | MEDLINE | ID: mdl-36437216

ABSTRACT

AIMS: To identify substance use disorder (SUD) patterns and their association with health outcomes among patients with type 2 diabetes (T2DM) and hypertension. METHODS: We used latent class analysis on electronic health records from the MetroHealth System (Cleveland, Ohio) to obtain the target SUD groups: (i) tobacco only (TUD); (ii) tobacco and alcohol (TAUD); and (iii) tobacco, alcohol, and at least one other substance (PSUD). A matching program using Mahalanobis distance within propensity score calipers created the matched control groups: no SUD (NSUD) for TUD, and TUD for the other two SUD groups. The numbers of participants in the target groups were 8,009 (TUD), 1,672 (TAUD), and 642 (PSUD). RESULTS: TUD was significantly associated with T2DM complications. Compared with TUD, the TAUD group showed significantly higher odds of all-cause mortality (adjusted odds ratio (aOR) = 1.46) but not of any T2DM complication. Compared with TUD, the PSUD group had a significantly higher risk of cerebrovascular accident (CVA) (aOR = 2.19), diabetic neuropathy (aOR = 1.76), myocardial infarction (MI) (aOR = 1.76), and all-cause mortality (aOR = 1.66). CONCLUSIONS: The finding of increased risk associated with PSUD may inform better management of patients with co-occurring T2DM and hypertension.
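The matching step described in this abstract (Mahalanobis distance within propensity score calipers) can be sketched as below. This is a minimal greedy 1:1 implementation under common assumptions (logistic-regression propensity scores, a caliper of 0.2 standard deviations of the logit score, matching without replacement); the covariates and caliper width are illustrative, not the study's exact specification.

```python
# Sketch: greedy 1:1 matching using Mahalanobis distance among controls
# whose logit propensity score falls within a caliper of the treated unit's.
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_within_caliper(X, treated, caliper_sd=0.2):
    """X: (n, p) covariate matrix; treated: 0/1 array.
    Returns a list of (treated_index, control_index) matched pairs."""
    # Propensity scores and their logit; caliper width in logit-SD units.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    caliper = caliper_sd * logit.std()

    # Inverse covariance matrix for the Mahalanobis distance.
    VI = np.linalg.inv(np.cov(X, rowvar=False))

    t_idx = np.flatnonzero(treated == 1)
    c_idx = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in t_idx:
        # Controls inside the propensity score caliper.
        eligible = [j for j in c_idx if abs(logit[i] - logit[j]) <= caliper]
        if not eligible:
            continue
        # Pick the closest eligible control by Mahalanobis distance.
        d = [float((X[i] - X[j]) @ VI @ (X[i] - X[j])) for j in eligible]
        j = eligible[int(np.argmin(d))]
        pairs.append((i, j))
        c_idx.remove(j)  # matching without replacement
    return pairs
```

Combining a caliper (which bounds overall propensity divergence) with Mahalanobis distance (which balances individual covariates among eligible controls) is a standard way to get closer covariate balance than either criterion alone.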


Subject(s)
Diabetes Mellitus, Type 2 , Hypertension , Substance-Related Disorders , Tobacco Use Disorder , Humans , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/complications , Tobacco Use Disorder/complications , Electronic Health Records , Substance-Related Disorders/diagnosis , Substance-Related Disorders/epidemiology , Substance-Related Disorders/complications , Hypertension/diagnosis , Hypertension/epidemiology , Outcome Assessment, Health Care